#dnl16 #AITRAPS · Berlin · June 14-15 · 2019

AI TRAPS: Automating Discrimination

The Art of Exposing Injustice - Part 2



A close look at how AI & algorithms reinforce the prejudices and biases of their human creators and societies, and how to fight discrimination.

Curated by Tatiana Bazzichelli. In cooperation with Transparency International



The 16th conference of the Disruption Network Lab

June 14: 16:00-20:45; June 15: 15:30-20:30
Location: Studio 1, Kunstquartier Bethanien, Mariannenplatz 2, 10997 Berlin.
Partner Venues: Kunstraum Kreuzberg/Bethanien, STATE Studio.

Funded by: Hauptstadtkulturfonds (Capital Cultural Fund of Berlin), Reva and David Logan Foundation (grant provided by NEO Philanthropy), Checkpoint Charlie Foundation. Supported [in part] by a grant from the Open Society Initiative for Europe within the Open Society Foundations. In partnership with: Friedrich Ebert Stiftung.

In collaboration with: Alexander von Humboldt Institute for Internet and Society (HIIG), r0g agency. Communication Partners: Sinnwerkstatt, Furtherfield. Media partners: taz, die tageszeitung, Exberliner.
In English.

2-Day Online-Ticket: 14€ · 1-Day Ticket: 8€
1-Day Solidarity-Ticket: 5€ (only available at the door)
Disruption Network Lab aims for an accessible and inclusive conference by providing a discounted solidarity ticket, available only at the door.


SCHEDULE

Friday, June 14 · 2019

15:30 - Doors open

16:00 - INTRO

Tatiana Bazzichelli (Founding Director, Disruption Network Lab, IT/DE) & Lieke Ploeger (Community Director, Disruption Network Lab, NL/DE).

16:15-17:30 - PANEL

THE TRACKED & THE INVISIBLE: From Biometric Surveillance to Diversity in Data Science

Adam Harvey (Artist and Researcher, US/DE), Sophie Searcy (Senior Data Scientist at Metis, US). Moderated by Adriana Groh (Head of Program Management, Prototype Fund, DE).

17:45-19:00 - PANEL

AI FOR THE PEOPLE: AI Bias, Ethics & The Common Good

Maya Indira Ganesh (Research coordinator, AI & Media Philosophy ‘KIM’ Research Group, Karlsruhe University of Arts and Design; PhD candidate, Leuphana University, Lüneburg, IN/DE), Slava Jankin (Professor of Data Science and Public Policy at the Hertie School of Governance, UK/DE). Moderated by Nicole Shephard (Researcher on Gender, Technology and Politics of Data, UK/DE).

19:15-20:45 - KEYNOTE

WHAT IS A FEMINIST AI? Possible Feminisms, Possible Internets

Charlotte Webb (Co-founder, Feminist Internet & Even Consultancy, UK). Moderated by Lieke Ploeger (Community Director, Disruption Network Lab, NL/DE).

Saturday, June 15 · 2019

15:00 - Doors open

15:30-16:30 - INVESTIGATION

HOW IS GOVERNMENT USING BIG DATA?

Crofton Black (Researcher, Journalist & Writer, The Bureau of Investigative Journalism, UK). Moderated by Daniel Eriksson (Head of Technology, Transparency International, SE/DE).

16:45-18:15 - KEYNOTE

RACIAL DISCRIMINATION IN THE AGE OF AI: The Future of Civil Rights in the United States

Mutale Nkonde (Tech Policy Advisor and Fellow at Data & Society Research Institute, US). Moderated by Rhianna Ilube (Writer, Curator and Host at The Advocacy Academy, UK/DE).

18:30-20:30 - PANEL

ON THE POLITICS OF AI: Fighting Injustice & Automatic Supremacism

Dia Kayyali (Leader of the Tech & Advocacy program at WITNESS, SY/US/DE), Os Keyes (Ada Lovelace Fellow, Human-Centred Design & Engineering, University of Washington, US), Dan McQuillan (Lecturer in Creative & Social Computing at Goldsmiths, University of London, UK).
Moderated by Tatiana Bazzichelli (Founding Director, Disruption Network Lab, IT/DE).


INSTALLATION

WE NEED TO TALK, AI

A Comic Essay on Artificial Intelligence by Julia Schneider and Lena Kadriye Ziyal. www.weneedtotalk.ai


AI TRAPS: Automating Discrimination

The Art of Exposing Injustice - Part 2

A close look at how AI & algorithms reinforce the prejudices and biases of their human creators and societies, and how to fight discrimination.

The notion of Artificial Intelligence dates back decades; it became a field of research in the United States in the mid-1950s. During the 1990s AI was also at the core of many debates around digital culture, cyberculture and the imaginary of future technologies. In the current discussion around big data, deep learning, neural networks, and algorithms, AI has been used as a buzzword for proposing new political and commercial agendas in companies, institutions and the public sector.

This conference does not address the concept of AI in general; it focuses on concrete applications of data science, machine learning, and algorithms, and on the “AI traps” that can follow when the design of these systems reflects, reinforces and automates current and historical biases and inequalities in society.

The aim is to foster a debate on how AI and algorithms impact our everyday life as well as culture, politics, institutions and behaviours, reflecting inequalities based on social, racial and gender prejudices. Computer systems are shaped by the implicit values of the humans involved in data collection, programming and usage. Algorithms are neither neutral nor unbiased: the consequences of historical patterns and individual decisions can become embedded in search engine results, social media platforms and software applications, reflecting systematic and unfair discrimination.

The analysis of algorithmic bias implies a close investigation of network structures and of the multiple layers of computational systems. The consequences of digitalisation are not bound to technology alone, but affect our society and culture at large. AI bias can be amplified by the way machines work, taking unpredictable directions once software is deployed and runs on its own.
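To make this concrete, the following toy sketch (ours, not material from the conference) shows how a model trained on synthetic, historically skewed hiring decisions reproduces that skew even though no prejudice is written into the code; all data and numbers are invented purely for illustration.

```python
# Toy illustration: a model trained on historically skewed decisions
# reproduces the skew, even though no prejudice appears in the code.
# All data here is synthetic and invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)          # both groups have identical skill

# Historical "hiring" decisions penalised group B for equal skill.
hired = (skill - 0.8 * group + rng.normal(0, 0.3, n)) > 0

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Two candidates with identical skill, differing only in group membership:
candidates = np.array([[0, 0.5], [1, 0.5]])
print(model.predict_proba(candidates)[:, 1])
# The group-B candidate scores markedly lower: the "neutral" algorithm
# has learned and automated the historical prejudice in its training data.
```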

By connecting researchers, writers, journalists, computer scientists and artists, this event aims to demystify the conception of artificial intelligence as pure and logical, focusing instead on how AI inherits the prejudices and biases of its human creators, and how machine learning may produce inequality as a consequence of mainstream power structures that overlook diversity and minorities.

This conference aims to raise awareness by reflecting on possible countermeasures from artistic, technological and political frameworks, and by reflecting critically on the usage and implementation of AI technology.

Curated by Tatiana Bazzichelli.
The Disruption Network Lab series The Art of Exposing Injustice is developed in cooperation with the Berlin-based International Secretariat of Transparency International, the global coalition against corruption, which celebrates its 25th anniversary this year.


Full program

Friday, June 14 · 2019

15:30 - Doors open

16:00 - INTRO

Tatiana Bazzichelli (Founding Director, Disruption Network Lab, IT/DE) & Lieke Ploeger (Community Director, Disruption Network Lab, NL/DE).

16:15-17:30 - PANEL

THE TRACKED & THE INVISIBLE: From Biometric Surveillance to Diversity in Data Science

Adam Harvey (Artist and Researcher, US/DE), Sophie Searcy (Senior Data Scientist at Metis, US). Moderated by Adriana Groh (Head of Program Management, Prototype Fund, DE).

Examples abound of AI creating harmful effects, and tracking and understanding these examples is valuable. But the next step is harder: how do the practitioners of AI improve what comes next and avoid ever worse and ever bigger “AI catastrophes”? Adam Harvey will present a very concrete art and research project investigating the ethics, origins, and individual privacy implications of face recognition image datasets and their role in the expansion of biometric surveillance technologies. He will present the latest developments of his project megapixels.cc, showcasing new research on how images posted to Flickr have been used by academic, commercial, and even defense and intelligence agencies around the world for research and development of facial recognition technologies. Drawing on her teaching work on AI ethics and cultural diversity, Sophie Searcy will discuss the fundamental problems underlying the design and implementation of AI and consider how those problems play out in real-world case studies. She will stress the importance of fostering an equal and representative data science community made up of individuals of all technical, educational, and personal backgrounds who understand the implications of data science for society at large.

17:45-19:00 - PANEL

AI FOR THE PEOPLE: AI Bias, Ethics & The Common Good

Maya Indira Ganesh (Research coordinator, AI & Media Philosophy ‘KIM’ Research Group, Karlsruhe University of Arts and Design; PhD candidate, Leuphana University, Lüneburg, IN/DE), Slava Jankin (Professor of Data Science and Public Policy at the Hertie School of Governance, UK/DE). Moderated by Nicole Shephard (Researcher on Gender, Technology and Politics of Data, UK/DE).

This panel investigates possible solutions to the known challenges related to AI bias, and discusses the opportunities AI might bring to the public. What role can companies, institutions and universities play in dealing with AI responsibly for the common good? How are public institutions handling the ethical usage of AI, and what is actually happening on the ground? Maya Indira Ganesh will focus on the seductive idea that we can standardise and manage well-being, goodness, and ethical behaviour in this algorithmically mediated moment. Her talk will examine typologies of policy, computational, industrial, legal, and creative approaches to shaping ‘AI ethics’ and bias-free algorithms, and critically reflect on the breathless enthusiasm for principles, boards and committees to ensure that AI is ethical. Slava Jankin will reflect on how machine learning can be used for the common good in the public sector, focusing on artificial intelligence and data science in public services and reflecting on possible products and design implementations.

19:15-20:45 - KEYNOTE

WHAT IS A FEMINIST AI? Possible Feminisms, Possible Internets

Charlotte Webb (Co-founder, Feminist Internet & Even Consultancy, UK). Moderated by Lieke Ploeger (Community Director, Disruption Network Lab, NL/DE).

What kind of new socio-political imaginary can be instantiated by attempting to design a Feminist Internet? How can feminist methods and values inform the development of less biased technologies? What is a feminist AI? In this keynote, Charlotte Webb will discuss how a collection of artists, designers and creative technologists have been using feminisms, creative practice and technology to explore these questions. She will discuss the challenges of designing a ‘Feminist Alexa’, which Feminist Internet has been attempting in response to the ways biased voice technologies are saturating markets and colonising homes across the globe. She will discuss how the use of feminist design standards can help ensure that technologies do not knowingly or unknowingly reproduce bias, and introduce the audience to Feminist Internet’s most recent project – a feminist chatbot that aims to teach people about AI bias.

Saturday, June 15 · 2019

15:00 - Doors open

15:30-16:30 - INVESTIGATION

HOW IS GOVERNMENT USING BIG DATA?

Crofton Black (Researcher, Journalist & Writer, The Bureau of Investigative Journalism, UK). Moderated by Daniel Eriksson (Head of Technology, Transparency International, SE/DE).

AI, algorithms, deep learning, big data - barely a week goes by without a new revelation about our increasingly digital future. Computers will cure cancer, make us richer, prevent crime, decide who gets into the country, determine access to services, map our daily movements, take our jobs away and send us to jail. Successive innovations spark celebration and concern. While new developments offer enticing economic benefits, academics and civil society sound warnings over corporate accountability, the intrusive collection of personal data and the ability of legal frameworks to keep pace with technological challenges. These concerns are particularly acute when it comes to the use of digital technology by governments and the public sector, which are compiling ever larger datasets on citizens as they move towards an increasingly digitised future. Questions abound about what governments are doing with data, who they are paying to do the work, and what the potential outcomes could be, especially for society's most vulnerable people. In May, Crofton Black and Cansu Safak of The Bureau of Investigative Journalism published a report, ‘Government Data Systems: The Bureau Investigates’, examining what IT systems the UK government has been buying. Their report looks at how to use publicly available data to build a picture of companies, services and projects in this area, through website scraping, automated searches, data analysis and freedom of information requests. In this session Crofton Black will present their work and findings, and discuss systemic problems of transparency over how the government is spending public money. Report: How is government using big data? The Bureau Investigates.
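For readers unfamiliar with the methods mentioned above, here is a minimal sketch of what the website-scraping step of such an investigation might look like; the URL, page structure and field names are hypothetical placeholders, not the Bureau's actual tooling.

```python
# Minimal sketch of scraping a (hypothetical) government procurement
# notice listing into a CSV for later analysis. The URL and the HTML
# selectors are illustrative assumptions; a real pipeline would target
# an actual site and its real markup.
import csv
import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example.gov/contract-notices"  # hypothetical listing page

def scrape_notices(pages=3):
    rows = []
    for page in range(1, pages + 1):
        resp = requests.get(BASE_URL, params={"page": page}, timeout=30)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        # Assumes each notice is an <article class="notice"> containing
        # title, supplier and value fields; adjust to the real markup.
        for notice in soup.select("article.notice"):
            rows.append({
                "title": notice.select_one("h2").get_text(strip=True),
                "supplier": notice.select_one(".supplier").get_text(strip=True),
                "value": notice.select_one(".value").get_text(strip=True),
            })
    return rows

if __name__ == "__main__":
    with open("notices.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["title", "supplier", "value"])
        writer.writeheader()
        writer.writerows(scrape_notices())
```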

16:45-18:15 - KEYNOTE

RACIAL DISCRIMINATION IN THE AGE OF AI: The Future of Civil Rights in the United States

Mutale Nkonde (Tech Policy Advisor and Fellow at Data & Society Research Institute, US). Moderated by Rhianna Ilube (Writer, Curator and Host at The Advocacy Academy, UK/DE).

To many, the questions posed to Mark Zuckerberg during the Facebook congressional hearings displayed U.S. House and Senate representatives’ lack of technical knowledge. However, legislative officials rely on the expertise of their legislative teams to prepare them for briefings. What the Facebook hearings actually revealed were low levels of digital literacy among legislative staffers. Mutale Nkonde will address the epistemological journey taken by congressional staffers in understanding the impact of AI technologies on wider society. Working with low-income Black communities in New York City who are fighting the use of facial recognition in public housing, she targets staffers of the Congressional Black Caucus with the goal of advocating for the fair treatment of Black Americans in the United States. She aims to make congressional staffers aware of how police jurisdictions, public housing landlords, retailers, and others have proposed using facial recognition technology as a weapon against African Americans and other people of colour. This talk explores how a conscious understanding of racial bias and AI technology should inform the work of policy makers and society at large while building the future of civil rights.

18:30-20:30 - PANEL

ON THE POLITICS OF AI: Fighting Injustice & Automatic Supremacism

Dia Kayyali (Leader of the Tech & Advocacy program at WITNESS, SY/US/DE), Os Keyes (Ada Lovelace Fellow, Human-Centred Design & Engineering, University of Washington, US), Dan McQuillan (Lecturer in Creative & Social Computing at Goldsmiths, University of London, UK).
Moderated by Tatiana Bazzichelli (Founding Director, Disruption Network Lab, IT/DE).

This panel focuses on the political aspects of AI and reflects on the unresolved injustices of our current system. What do we need to take into consideration when calling for a just AI? Do we need to implement changes in how we design AI, or should we rather adopt a larger perspective and reflect on how human society is structured? According to Dan McQuillan, AI is political: not only because of the question of what is to be done with it, but because of the political tendencies of the technology itself. Dia Kayyali will present ways in which AI is facilitating white supremacy, nationalism, racism and transphobia. They will focus on the ways in which AI is being developed and deployed, in particular for policing and content moderation - two seemingly disparate but politically linked applications. Conversations around AI bias tend to discuss differences in outcome between demographic categories, within notions of race, gender, sexuality, disability or class. Rather than treating these attributes as universal and unquestioned, which opens up space for cultural imperialism and presumption in how we "fix" bias, Os Keyes will reflect on how context shapes these issues. They will discuss how not to fall into the trap of universalising such concepts, arguing that a truly "just" or "equitable" AI must not just be "bias-free": it must also be local, contextual and meaningfully shaped and controlled by those subject to it. Finally, focusing on machine learning and artificial neural networks, also known as deep learning, Dan McQuillan will reflect on how developing an antifascist AI can influence our understanding of what is both possible and desirable, and of what ought to be.


INSTALLATION: “We Need to Talk, AI” (2019)

A COMIC ESSAY ON ARTIFICIAL INTELLIGENCE by Julia Schneider and Lena Kadriye Ziyal

Everybody's talking about artificial intelligence. But what does it actually mean, and what opportunities and risks come with it? If you ask yourself this question, a new comic essay can help: We Need To Talk, AI. With intelligent explanations, charming illustrations and cats, a data expert and an artist explain AI. Why has AI been developed primarily by men so far - and what could that mean? How do we get clear responsibilities, more accountability and more transparency into the AI business? Where are the systems that not only pursue a commercial goal, but really want to help us live a better life? "We need new points of view and new perspectives in the debate," the authors say. With a doctorate in economics, Julia Schneider appreciates data and code as tools for solving complex puzzles - and loves comics as a medium for telling complex stories. The artist Lena Kadriye Ziyal comes from the opposite direction: she likes to encode complexity with associations, adding her own perspective to the meaning of a subject. The 56-page comic aims to give more people an insight into the world of AI. On the project page, anyone can download it free of charge under a CC license, share it and build on it artistically, or order the printed edition for 11.99 euros. weneedtotalk.ai


SPEAKERS

Charlotte Webb

Co-founder, Feminist Internet & Even Consultancy, UK.
Twitter: @otheragent
Charlotte Webb is Co-founder and Chief Leopard of the Feminist Internet, a non-profit advancing internet equalities for women and other marginalised groups through creative, critical practice. She is also Founding Director of Even, an ethical technology consultancy. She was nominated by the Evening Standard as one of the most influential people in technology and science in London in 2018. She has been quoted and featured by CBBC, BBC Radio 4, the Evening Standard, Marie Claire and Londnr magazine, and has presented at TEDx, Internet Age Media, the Cannes Lions Festival of Creativity, the Barbados Internet Governance Forum and Online Educa. She has more than ten years’ experience in art and design higher education, with a focus on equipping students with the attributes needed to thrive in the creative digital industries.

Mutale Nkonde

Tech Policy Advisor and Fellow at Data & Society Research Institute, US.
Twitter: @mutalenkonde
Mutale Nkonde is a tech policy advisor and fellow at Data & Society Research Institute in New York City. In April 2019 she was part of a team that introduced the Algorithmic Accountability Act to the House. Mutale is concerned about the information gaps in Congress and is conducting an ethnographic study on how Black lawmakers learn about the impact of Artificial Intelligence on society.

Adam Harvey

Artist and Researcher on Computer Vision, Privacy and Surveillance, US/DE.
Twitter: @adamhrv
Adam Harvey is an artist and researcher focused on privacy, surveillance, and computer vision. His past projects include developing camouflage techniques for subverting face detection, thermal imaging, and location tracking. He is the founder of vframe.io, a computer vision project for human rights researchers; and is currently a researcher-in-residence at Karlsruhe HfG. His most recent project MegaPixels explores the image datasets and information supply chains contributing to the growing crisis of authoritarian biometric surveillance technologies.

Sophie Searcy

Senior Data Scientist at Metis, US.
Twitter: @artificialsoph
Sophie Searcy is a Senior Data Scientist at Metis, which provides full-time immersive bootcamps, part-time professional development courses, online learning, and corporate programs to accelerate the careers of data scientists.

Adriana Groh

Head of Program Management, Prototype Fund, DE.
Twitter: @avwgo
Adriana Groh is the Director of the Prototype Fund, a funding program by the Open Knowledge Foundation and the Federal Ministry for Education and Research for software development in the field of Public Interest Tech. She co-founded wepublic, an app designed for digital dialogue between citizens and politicians and previously studied Public Policy and Governance in Maastricht. Her interest lies in the intersections of technology, policy and society.

Maya Indira Ganesh

Research coordinator, AI & Media Philosophy ‘KIM’ Research Group, Karlsruhe University of Arts and Design; PhD candidate, Leuphana University, Lüneburg, IN/DE.
Twitter: @mayameme
Maya has a hybrid portfolio of work spanning two decades with arts and culture organisations, human rights and technology groups, academia, NGOs, and industry. Her doctoral work proposes an ‘ethical apparatus’ emerging from the application of computational techniques and philosophical approaches to the shaping of machine autonomy. Taking the case of autonomous vehicles, she examines how ‘ethics’ is positioned as a device to gauge and measure machine autonomy, but in the process becomes the subject of measurement itself. What is the role of the human, then, as computational systems are being imagined to regulate themselves? Maya’s other areas of research expertise include gender, feminism and technology; big data and discrimination; and digital security and privacy in human rights defence. She has worked with Tactical Tech, transmediale, Haus der Kulturen der Welt, the Citizen Lab at the University of Toronto, and UNICEF India, among others.

Slava Jankin

Professor of Data Science and Public Policy at the Hertie School of Governance, UK/DE.
Twitter: @smych
Slava Jankin is Professor of Data Science and Public Policy at the Hertie School of Governance and Director of the Hertie School Data Science Lab. His research and teaching are primarily in the fields of natural language processing and machine learning. Before joining the Hertie School faculty, he was a Professor of Public Policy and Data Science at the University of Essex, holding a joint appointment in the Institute for Analytics and Data Science and the Department of Government. At Essex, Slava served as a Chief Scientific Adviser to Essex County Council, focusing on artificial intelligence and data science in public services. He previously worked at University College London and the London School of Economics. Slava holds a PhD in Political Science from Trinity College Dublin.

Nicole Shephard

Researcher on Gender, Technology and Politics of Data, UK/DE.
Twitter: @kilolo_
Nicole Shephard is an independent researcher, consultant and freelance writer who takes an intersectional feminist perspective on tech-related topics like the politics of data and algorithms, surveillance, online harassment, privacy as well as diversity and inclusion in the tech workplace. Currently she is launching an independent exploratory study on feminist data. She holds a PhD in Gender Studies from London School of Economics and Political Science (UK).

Crofton Black

Researcher, Journalist & Writer, The Bureau of Investigative Journalism, UK.
Twitter: @cr0ft0n
Crofton Black is a researcher and writer with extensive experience of complex investigations in the field of human rights abuses and counter-terrorism. His work at The Bureau of Investigative Journalism has focused on military contractors, propaganda, automated social media and government IT systems, and he is currently leading a project exploring the use of big data and algorithmic systems in government. He is a leading expert on the CIA’s rendition, detention and interrogation programme. He has a PhD in the history of philosophy from the University of London and is co-author with photographer Edmund Clark of the award-winning ‘Negative Publicity: Artefacts of Extraordinary Rendition’. www.thebureauinvestigates.com

Daniel Eriksson

Head of Technology, Transparency International, SE/DE.
Twitter: @spatialspook
Daniel Eriksson is the Head of Technology for Transparency International. After starting his career as a bomb disposal specialist and peacekeeper with the Swedish army, mostly clearing minefields and other threats in local communities in Bosnia, Eriksson went on to lead digitalisation efforts in international organisations, NGOs and governments in countries including Iraq and Afghanistan. He has been a technology executive in listed companies and the chief executive of one of India’s largest corporate security firms. His PhD explored the use of spatial algorithms in foreign aid.

Os Keyes

Ada Lovelace Fellow, Human-Centred Design & Engineering, University of Washington, US.
Twitter: @farbandish
Os Keyes is a PhD student at the University of Washington, where they study gender, data and infrastructures of control. Their work spans Science and Technology Studies, Gender Studies and Human-Computer Interaction, and currently focuses on examining the implications that facial recognition systems have for shaping human identity. They are also an essayist, with pieces published by Real Life, Slate, Scientific American and Logic Magazine, and an inaugural Ada Lovelace Fellow.

Dia Kayyali

Leader of the Tech & Advocacy program at WITNESS, SY/US/DE.
Twitter: @DiaKayyali
Dia Kayyali coordinates WITNESS’ tech + advocacy work, engaging with technology companies and working on tools and policies that help human rights advocates safely, securely and ethically document human rights abuses and expose them to the world. Dia first recognized the need for documentation in the fight for human rights in high school when they got teargassed in the 1999 World Trade Organization protests. They’ve been engaged in activism ever since. Their interest in surveillance developed as a Syrian-American in post-9/11 USA. Before joining WITNESS, Dia worked as a fellow with Coding Rights, a Brazilian digital rights organization, researching surveillance and technology in the context of the 2016 Olympics. Dia served on the board of the National Lawyers Guild from 2010-2015.

Dan McQuillan

Lecturer in Creative & Social Computing at Goldsmiths, University of London, UK.
Twitter: @danmcquillan
Dan McQuillan is a Lecturer in Creative & Social Computing at Goldsmiths College, University of London. He has a PhD in Experimental Particle Physics, and prior to academia he worked as Amnesty International's Director of E-communications. Recent publications include 'Algorithmic States of Exception', 'Data Science as Machinic Neoplatonism' and 'People's Councils for Ethical Machine Learning'. He is co-founder of Science for Change Kosovo and Social Innovation Camp, which brought together ideas, people and digital tools to prototype solutions to social problems, and ran camps in different countries including Georgia, Armenia & Kyrgyzstan. He attended the G8 protest in Genoa in 2001 and was one of 93 people who were beaten, disappeared & tortured by the police.

Rhianna Ilube

Writer, Curator and Host at The Advocacy Academy, UK/DE.
rhiannailube.com
Rhianna Ilube is a writer, host and curator from London. She moved to Berlin in 2018 where she was a resident curator at be'kech anti-cafe in Wedding. She has curated and hosted a range of events and debates focused on the intersection of technology and politics, on topics such as disinformation, diaspora activism and filter bubbles. She also founded and hosts Berlin’s No Small Talk Party, and curated an event series on 'Afropean' identity. Rhianna is the co-host of the podcast Tanti Table, interviewing artists and activists from diaspora communities in Germany. She holds a Double First Class degree from Cambridge University in Politics, Psychology and Sociology.

